
    Optimal relaxed causal sampler using sampled-data system theory

    This paper studies the design of an optimal relaxed causal sampler using sampled-data system theory. A lifted frequency-domain approach is used to obtain the existence conditions and the optimal sampler. A state-space formulation of the results is also provided. The resulting optimal relaxed causal sampler is a cascade of a linear continuous-time system followed by a generalized sampler and a discrete system.
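
    As a minimal notational sketch of that cascade (the symbols below are illustrative, not the paper's notation), the optimal relaxed causal sampler factors as

        \[ \text{sampler} \;=\; D \,\circ\, \mathcal{S} \,\circ\, F, \]

    where $F$ is a linear continuous-time system, $\mathcal{S}$ is a generalized sampler mapping continuous-time signals to discrete-time sequences, and $D$ is a discrete-time system.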

    User-Behavior-Guided Dynamic Loaders in Embedded Operating Systems

    This publication describes a new user-behavior-guided dynamic loader in embedded operating systems (OS). It is well understood that embedded devices, such as smartphones, are constrained in their random-access memory (RAM) resources. To better utilize RAM, developers use shared libraries when building their application software (often referred to as apps or applications). Shared libraries reduce the memory footprint of individual applications. However, the use of shared libraries, although beneficial in minimizing the volatile-memory footprint, comes at a performance cost. Performance improves, however, when shared libraries are pre-loaded before the application software is accessed by the end user. Current OS platforms lack a heuristic-guided approach to predict which shared libraries will be needed when the end user accesses the application software. To this end, a new dynamic loader is developed to predict which shared libraries to pre-load. To enable the OS to predict and pre-load shared libraries tailored to the end user, the new user-behavior-guided dynamic loader employs three inputs: a user embedding, the current time, and the current location. To improve the performance of the dynamic loader, federated learning is utilized to distribute the computational load across devices and to benefit from each end user’s input data. By so doing, the described techniques optimize the prediction of the relevant shared libraries to be pre-loaded while protecting the end user’s privacy. Consequently, user-behavior-guided dynamic loaders reduce memory pressure on embedded devices while improving their performance.
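
    As a rough illustration of the prediction step (not the publication's actual code; all names, such as LibraryPreloadPredictor, and the scoring rule are hypothetical), a loader could rank shared libraries against the three inputs like this:

        import math
        from dataclasses import dataclass

        @dataclass
        class Context:
            user_embedding: list[float]  # learned per-user vector
            hour_of_day: int             # current-time feature
            location_id: int             # coarse current-location feature

        class LibraryPreloadPredictor:
            """Ranks shared libraries so the loader can pre-load the top-k."""

            def __init__(self, library_embeddings: dict[str, list[float]]):
                self.library_embeddings = library_embeddings

            def score(self, ctx: Context, lib: str) -> float:
                # Toy scoring rule: cosine similarity between the user embedding
                # and a library embedding; a trained model would also condition
                # on hour_of_day and location_id.
                u, v = ctx.user_embedding, self.library_embeddings[lib]
                dot = sum(a * b for a, b in zip(u, v))
                norm = math.hypot(*u) * math.hypot(*v)
                return dot / norm if norm else 0.0

            def libraries_to_preload(self, ctx: Context, k: int = 3) -> list[str]:
                ranked = sorted(self.library_embeddings,
                                key=lambda lib: self.score(ctx, lib), reverse=True)
                return ranked[:k]

    Under the federated-learning scheme described above, the embeddings would be updated on-device and only model updates would be shared, matching the stated privacy goal.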

    Distributed Transfer Learning on Embedded Devices

    An Internet-of-Things (IoT) platform that enables the retraining of machine learning models on embedded devices is described. The IoT platform utilizes transfer learning to retrain models, so that they can be reused for a similar purpose, in a cluster of IoT products connected to each other over a local-area network (LAN), personal-area network (PAN), or wireless personal-area network (WPAN). Unlike current IoT platforms, the distributed transfer learning IoT platform does not need a centralized computing system, such as a cloud-computing server or a network server, to perform model training, but rather executes this training in the cluster of IoT products. To reach this goal, in addition to transfer learning, the described IoT platform supports application programming interfaces (APIs) that specify a small portion of the existing pretrained model to be retrained, specify a data pipeline in the cluster of IoT devices to be used to retrain the model, and tune the model.
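
    A sketch of what such an API surface might look like (the names ClusterSession and RetrainSpec are hypothetical, not the platform's actual API):

        from dataclasses import dataclass

        @dataclass
        class RetrainSpec:
            trainable_layers: list[str]  # small portion of the pretrained model to retrain
            data_sources: list[str]      # devices in the LAN/PAN/WPAN cluster supplying data
            learning_rate: float = 1e-4
            epochs: int = 1

        class ClusterSession:
            """Coordinates transfer learning across IoT devices on the local network."""

            def __init__(self, devices: list[str]):
                self.devices = devices

            def retrain(self, model_id: str, spec: RetrainSpec) -> None:
                # 1) Freeze every layer except those in spec.trainable_layers.
                # 2) Stream batches over the data pipeline formed by spec.data_sources.
                # 3) Fine-tune within the cluster; no cloud server is involved.
                for device in spec.data_sources:
                    print(f"pulling batches from {device} to tune {model_id} "
                          f"on {spec.trainable_layers}")

        session = ClusterSession(devices=["thermostat-1", "camera-2", "speaker-3"])
        session.retrain("keyword-spotter", RetrainSpec(
            trainable_layers=["classifier_head"],
            data_sources=["camera-2", "speaker-3"]))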

    PEER LEARNING ON THE EDGE IN VEHICLES

    A vehicle head unit may train a surround-view (SV) detection module to rectify distortions in fish-eye camera images of the surroundings of a vehicle by comparing the object (e.g., traffic signs, lane markings, etc.) detection results of the SV detection module with those of an advanced driver assistance system (ADAS) detection module (e.g., while the SV detection module and the ADAS detection module are detecting the same objects in the same scenery). The vehicle head unit may receive the object detection results of the ADAS detection module by using one or more communication processes. For example, the vehicle head unit may use the object detection results of the ADAS detection module as ground-truth data for training the SV detection module. The vehicle head unit may then update parameters, weights, and/or the like of the SV detection module to decrease the difference between the object detection results of the SV detection module and the ADAS detection module. In some examples, the vehicle head unit may send (potentially after anonymizing personally identifiable information) the updated parameters, weights, and/or the like of the SV detection module to a remote computing system (e.g., a cloud server) to train a machine learning model that implements SV detection modules. The machine learning model may be trained using the collective updated parameters, weights, and/or the like of multiple SV detection modules.
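
    A toy sketch of that training loop (a one-parameter stand-in, not the head unit's actual code): the SV module's output is nudged toward the ADAS module's output, which serves as the ground truth.

        def sv_detect(param: float, frame: float) -> float:
            # Stand-in for the SV module: one scalar "detection" per frame.
            return param * frame

        def adas_detect(frame: float) -> float:
            # Stand-in for the ADAS module, treated as ground truth.
            return 2.0 * frame

        def peer_learning_step(param: float, frames: list[float],
                               lr: float = 0.05) -> float:
            # Gradient step on the mean squared difference between the
            # SV and ADAS detections over a batch of frames.
            grad = sum(2 * (sv_detect(param, f) - adas_detect(f)) * f
                       for f in frames) / len(frames)
            return param - lr * grad

        param = 0.5
        for _ in range(100):
            param = peer_learning_step(param, frames=[0.5, 1.0, 1.5])
        print(round(param, 3))  # approaches 2.0, matching the ADAS "labels"
        # The updated parameter could then be anonymized and uploaded to a
        # remote server for aggregation across vehicles, as described above.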

    Map building fusing acoustic and visual information using autonomous underwater vehicles

    Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 30 (2013): 763–783, doi:10.1002/rob.21473.

    We present a system for automatically building 3-D maps of underwater terrain by fusing visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the on-board velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot’s trajectory, the map, and the camera location in the robot’s frame. Matched visual features are treated within the pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches are used to impose relative pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system presented works under a variety of deployment scenarios, on robots with diverse sensor suites. Results of using the system to map the structure and appearance of a section of coral reef are presented using data acquired by the Seabed autonomous underwater vehicle.

    The work described herein was funded by the National Science Foundation CenSSIS ERC under grant number EEC-9986821, and by the National Oceanic and Atmospheric Administration under grant number NA090AR4320129.
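
    Structurally, the pose graph combines two factor types, as in this sketch (illustrative Python, not the authors' implementation):

        from dataclasses import dataclass

        @dataclass
        class LandmarkFactor:
            pose_id: int          # robot pose from which the feature was imaged
            landmark_id: int      # 3-D landmark seen by the single camera
            pixel: tuple[float, float]

        @dataclass
        class RelativePoseFactor:
            pose_i: int           # pose on one trackline
            pose_j: int           # pose on another trackline
            relative_pose: tuple[float, float, float]  # e.g., (dx, dy, dheading)

        class PoseGraph:
            def __init__(self):
                self.factors: list[object] = []

            def add_camera_measurement(self, pose_id, landmark_id, pixel):
                # Matched visual features become images of 3-D landmarks.
                self.factors.append(LandmarkFactor(pose_id, landmark_id, pixel))

            def add_submap_match(self, pose_i, pose_j, relative_pose):
                # Bathymetry submap matches link poses across tracklines.
                self.factors.append(RelativePoseFactor(pose_i, pose_j, relative_pose))

            def optimize(self):
                # In the paper this is solved with square root information
                # smoothing and mapping; omitted in this sketch.
                ...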

    FISH_ROCK: a tool for identifying and counting benthic organisms in bottom photographs

    Recent advances in underwater robotics and imaging technology now enable the rapid acquisition of large datasets of near-bottom, high-resolution digital imagery. These images provide the potential for developing a non-invasive technique for fisheries data acquisition that reveals organisms in their natural habitat and can be used to identify important habitat characteristics. Using these large datasets effectively, however, requires the development of computer-based techniques that increase the efficiency of data analysis. This document describes one such tool, FISH_ROCK, which was developed for a group of fisheries researchers using the SeaBED AUV during a research cruise in October 2005. FISH_ROCK is a graphical user interface (GUI) that is executed within Matlab and allows users to digitally generate a database that includes organism identification, quantity, size, and distribution, as well as details about habitat. Further development of this GUI will enable its use in different oceanographic environments, including the deep sea, and will include modules that perform data analysis.

    Funding was provided by the National Oceanic and Atmospheric Administration under Grant No. AB133F05SE5828.
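
    A hypothetical sketch (in Python rather than Matlab) of the kind of record such a GUI might append to its database; the field names are illustrative, not FISH_ROCK's actual schema:

        from dataclasses import dataclass

        @dataclass
        class OrganismRecord:
            image_file: str               # bottom photograph the annotation came from
            species: str                  # organism identification
            count: int                    # quantity observed in the image
            size_cm: float                # estimated organism size
            position_px: tuple[int, int]  # location in the image (distribution)
            habitat: str                  # e.g., "sand", "boulder", "coral rubble"

        record = OrganismRecord("dive12_frame_0042.jpg", "Sebastes fasciatus",
                                3, 18.5, (412, 240), "boulder")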

    Bayesian Decision Making to Localize Visual Queries in 2D

    This report describes our approach for the EGO4D 2023 Visual Query 2D Localization Challenge. Our method aims to reduce the number of false positives (FP) that occur because of high similarity between the visual crop and the bounding boxes proposed by the baseline's Region Proposal Network (RPN). Our method uses a transformer to determine similarity in higher dimensions, which serves as our prior belief. This is then combined with the similarity in lower dimensions from the Siamese head, acting as our measurement, to generate a posterior that determines the final similarity of the visual crop with the proposed bounding box. Our code is publicly available at https://github.com/s-m-asjad/EGO4D_VQ2D.

    Comment: Report for the EGO4D 2023 Visual Query 2D Localization Challenge
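
    A minimal sketch of that fusion (illustrative, not the report's implementation): treating "this proposal matches the visual crop" as a binary event, the transformer similarity acts as the prior and the Siamese-head similarity as the measurement likelihood.

        def posterior_match_probability(prior: float, likelihood: float) -> float:
            """Bayes rule for the binary event 'proposal matches the query crop'."""
            # P(match | m) = P(m | match) P(match) /
            #   [P(m | match) P(match) + P(m | no match) P(no match)]
            evidence = likelihood * prior + (1.0 - likelihood) * (1.0 - prior)
            return (likelihood * prior) / evidence if evidence > 0 else 0.0

        # A proposal the Siamese head likes (high similarity) but the transformer
        # prior doubts gets pulled down, reducing false positives:
        print(posterior_match_probability(prior=0.2, likelihood=0.8))  # 0.5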